
    Dynamic affordability assessment: predicting an applicant’s ability to repay over the life of the loan

    In the credit decision-making process, both an applicant's creditworthiness and their affordability should be assessed. While credit scoring focuses on creditworthiness, affordability is typically checked on the basis of current income, estimated current consumption and the existing debts stated in a credit report. In contrast to that static approach, this paper proposes a theoretical framework for dynamic affordability assessment. In this approach, both income and consumption are allowed to vary over time and their changes are described with random effects models for panel data. The models are derived from the economic literature, including the Euler equation of consumption. A simulation based on these models generates predicted income and consumption time series for a given applicant. For each pair of predicted income and consumption time series, the applicant's ability to repay is checked over the life of the loan for all possible instalment amounts. As a result, a probability of default is assigned to each amount, which can help find the maximum affordable instalment. This is illustrated with an example based on artificial data. Assessing affordability over the loan repayment period, and taking into account the variability of income and expenditure over time, are in line with recommendations of the UK Office of Fair Trading and the Financial Services Authority. In practice, the suggested approach could contribute to responsible lending.
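
    As a rough illustration of the mechanics described above, the sketch below simulates joint income and consumption paths for one applicant and derives a probability of default for each candidate instalment. The random-walk-with-individual-effect dynamics, the "more than three missed payments" default rule and all numeric values are illustrative assumptions, not the paper's estimated models.

        import numpy as np

        rng = np.random.default_rng(42)

        def simulate_paths(n_paths=10_000, n_months=60,
                           log_inc0=8.0, log_con0=7.6,
                           drift_inc=0.002, drift_con=0.002,
                           sd_inc=0.05, sd_con=0.04, sd_ind=0.1):
            """Simulate joint log-income / log-consumption paths for one applicant.

            Each path combines an individual random effect (sd_ind) with i.i.d.
            monthly shocks -- a crude stand-in for the paper's random effects
            panel models; all parameter values are made up.
            """
            u = rng.normal(0.0, sd_ind, size=(n_paths, 1))   # individual effect
            t = np.arange(1, n_months + 1)
            shocks_i = rng.normal(0.0, sd_inc, size=(n_paths, n_months))
            shocks_c = rng.normal(0.0, sd_con, size=(n_paths, n_months))
            income = np.exp(log_inc0 + u + drift_inc * t + np.cumsum(shocks_i, axis=1))
            consumption = np.exp(log_con0 + u + drift_con * t + np.cumsum(shocks_c, axis=1))
            return income, consumption

        def pd_by_instalment(income, consumption, instalments, miss_limit=3):
            """Probability of default per candidate instalment: a path defaults
            when its monthly surplus falls below the instalment in more than
            miss_limit months (an assumed default rule)."""
            surplus = income - consumption                   # (paths, months)
            return np.array([((surplus < m).sum(axis=1) > miss_limit).mean()
                             for m in instalments])

        income, consumption = simulate_paths()
        grid = np.arange(50, 1050, 50)
        pds = pd_by_instalment(income, consumption, grid)
        affordable = grid[pds <= 0.05]
        print("max affordable instalment at PD <= 5%:",
              affordable.max() if affordable.size else None)

    Scanning the instalment grid from the top, the largest amount whose simulated probability of default stays below the chosen threshold plays the role of the maximum affordable instalment.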

    Novel concept of polymers preparation with high photoluminescent quantum yield

    A series of carbazolyl-containing polymers was synthesized by anionic polymerization of various oxiranes and of methyl methacrylate. The polymerization was carried out in THF using carbazylpotassium activated with 18-crown-6 as the initiator. Size exclusion chromatography showed the resulting polymers to have a degree of polymerization (DPn) of about 20 and a relatively low dispersity, in the range of 1.07–1.66. Their optical properties were investigated by means of UV–vis and photoluminescence spectroscopies. The obtained polymers emitted light with an emission maximum at about 370 nm and a high quantum yield of up to 79%. Thus, it was confirmed that using a fluorophore initiator for the polymerization of non-photoresponsive monomers is an efficient route to photoluminescent polymers.

    Does segmentation always improve model performance in credit scoring?

    Credit scoring allows for the credit risk assessment of bank customers. A single scoring model (scorecard) can be developed for the entire customer population, e.g. using logistic regression. However, it is often expected that segmentation, i.e. dividing the population into several groups and building separate scorecards for them, will improve the model performance. The most common statistical methods for segmentation are two-step approaches, in which logistic regression follows Classification and Regression Trees (CART), Chi-squared Automatic Interaction Detection (CHAID) trees, etc. In this research, the two-step approaches are applied, as well as a new, simultaneous method in which both the segmentation and the scorecards are optimised at the same time: Logistic Trees with Unbiased Selection (LOTUS). For reference purposes, a single-scorecard model is used. The above-mentioned methods are applied to data provided by two of the major UK banks and one of the European credit bureaus. The model performance measures are then compared to examine whether the segmentation methods yield any improvement. It is found that segmentation does not always improve model performance in credit scoring: for none of the analysed real-world datasets do the multi-scorecard models perform considerably better than the single-scorecard ones. Moreover, in this application, there is no difference in performance between the two-step and simultaneous approaches.
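
    The comparison can be mimicked on synthetic data: a single logistic scorecard versus a two-step model in which a shallow CART tree defines segments and each segment gets its own logistic scorecard, both judged by AUC on a holdout set. Everything in the sketch below (the data-generating process, tree depth, sample sizes) is an illustrative assumption using scikit-learn, not the banks' data or the paper's exact setup.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        n = 20_000
        X = rng.normal(size=(n, 5))
        logit = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2]
        y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # Single-scorecard benchmark.
        single = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        auc_single = roc_auc_score(y_te, single.predict_proba(X_te)[:, 1])

        # Two-step approach: a shallow CART tree defines the segments...
        tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=2000,
                                      random_state=0).fit(X_tr, y_tr)
        seg_tr, seg_te = tree.apply(X_tr), tree.apply(X_te)

        # ...then each segment gets its own logistic scorecard.
        scorecards = {}
        for s in np.unique(seg_tr):
            mask = seg_tr == s
            scorecards[s] = LogisticRegression(max_iter=1000).fit(X_tr[mask], y_tr[mask])

        proba = np.array([scorecards[s].predict_proba(x.reshape(1, -1))[0, 1]
                          for s, x in zip(seg_te, X_te)])
        auc_seg = roc_auc_score(y_te, proba)
        print(f"AUC single: {auc_single:.4f}  AUC segmented: {auc_seg:.4f}")

    With a purely linear data-generating process, as here, the segmented model has no systematic edge over the single scorecard, which is the kind of outcome the paper reports for its real-world datasets.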

    Factors determining acceptance of illness in patients with arterial hypertension and comorbidities

    Background: Hypertension is one of the most common chronic diseases. The need to undergo indefinite treatment, combined with the risk of complications affecting the cardiovascular system, imposes a significant psychological and somatic burden on the patient. Arterial hypertension (AH) is rarely an isolated disease; the most commonly observed comorbidities include metabolic disorders as well as clinically apparent complications, which are associated with polypharmacy and thus an increased risk of drug-induced adverse events. Aims: The aim of the study was to determine factors that have an impact on illness acceptance in patients with AH. Methods: The study included 532 patients diagnosed with AH. A standardized Acceptance of Illness Scale questionnaire and a questionnaire prepared by the authors were used. The Acceptance of Illness Scale classifies illness acceptance as high (30–40 points), average (19–29 points), or low (8–18 points). Results: A high level of illness acceptance was noted in 45% of participants and an average level in 46%. Patients with different levels of illness acceptance differed in duration of AH, number of cardiovascular and all diseases, frequency of mental disorders, and number of drugs taken. The number of cardiovascular diseases was significantly lower in patients with high levels of illness acceptance than in those with poor acceptance. Disease duration in patients with a high level of illness acceptance was significantly shorter than in patients with average acceptance. Conclusions: The level of illness acceptance is correlated with disease duration, number of diseases, and number of medications taken.
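
    For reference, the quoted cut-offs map directly onto a small scoring helper; the function below merely encodes the three bands from the abstract and is hypothetical code, not part of the study.

        def ais_level(total: int) -> str:
            """Map an Acceptance of Illness Scale total (8-40) to the bands
            quoted in the abstract."""
            if not 8 <= total <= 40:
                raise ValueError("AIS total must lie between 8 and 40")
            if total >= 30:
                return "high"
            if total >= 19:
                return "average"
            return "low"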

    Population and labour force projections for 27 European countries, 2002–2052: impact of international migration on population ageing

    Population and labour force projections are made for 27 selected European countries for 2002–2052, focusing on the impact of international migration on population and labour force dynamics. Starting from single scenarios for fertility, mortality and economic activity, three sets of assumptions are explored regarding migration flows, taking into account probable policy developments in Europe following the enlargement of the EU. In addition to age structures, various support ratio indicators are analysed. The results indicate that plausible levels of immigration cannot offset the negative effects of population and labour force ageing.
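
    The underlying method is a standard cohort-component projection. The minimal sketch below projects an age structure forward under two migration scenarios and compares a simple support ratio; the survival rates, birth numbers and migration volumes are made-up illustrations, not the study's scenarios.

        import numpy as np

        def project(pop, survival, births_per_step, migration, steps):
            """One-sex cohort-component projection in 5-year age groups.
            Survivors move one group per step; the open-ended top group is
            handled crudely. All rates here are illustrative."""
            traj = [pop.copy()]
            for _ in range(steps):
                new = np.empty_like(pop)
                new[1:] = pop[:-1] * survival[:-1]   # age the cohorts forward
                new[0] = births_per_step             # new entrants, fixed scenario
                new += migration                     # net migration scenario
                pop = new
                traj.append(pop.copy())
            return np.array(traj)

        ages = 17                                    # 0-4, 5-9, ..., 80+
        pop0 = np.full(ages, 1000.0)
        surv = np.linspace(0.99, 0.6, ages)
        mig_zero = np.zeros(ages)
        mig_work = np.zeros(ages)
        mig_work[4:9] = 20.0                         # net inflow at ages 20-44

        for name, mig in [("no migration", mig_zero), ("with migration", mig_work)]:
            final = project(pop0, surv, 900.0, mig, steps=10)[-1]   # 10 x 5 = 50 years
            support = final[4:13].sum() / final[13:].sum()          # ages 20-64 vs 65+
            print(f"{name}: support ratio = {support:.2f}")

    Even a generous working-age inflow only nudges the support ratio, which is the qualitative point the projections make about migration and ageing.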

    (Photo)physical properties of new molecular glasses end-capped with thiophene rings composed of diimide and imine units

    New symmetrical arylene bisimide derivatives built from electron-donating and electron-accepting units were synthesized. They consist of a phthalic diimide or naphthalenediimide core with imine linkages and are end-capped with thiophene, bithiophene, or (ethylenedioxy)thiophene units. Moreover, polymers were obtained from a new diamine, N,N′-bis(5-aminonaphthalenyl)naphthalene-1,4,5,8-dicarboximide, and 2,5-thiophenedicarboxaldehyde or 2,2′-bithiophene-5,5′-dicarboxaldehyde. The prepared azomethine diimides exhibited glass-forming properties. The obtained compounds emitted blue light with an emission maximum at 470 nm. The absorption coefficient was determined as a function of photon energy using spectroscopic ellipsometry. All compounds are electrochemically active and undergo reversible electrochemical reduction and irreversible oxidation, as found in cyclic voltammetry and differential pulse voltammetry (DPV) studies. They exhibited a low electrochemically (DPV) determined energy band gap (Eg), from 1.14 to 1.70 eV. The highest occupied molecular orbital and lowest unoccupied molecular orbital levels and Eg were additionally calculated by density functional theory at the B3LYP/6-31G(d,p) level. The photovoltaic properties of two model compounds as the active layer in organic solar cells with the configuration indium tin oxide/poly(3,4-(ethylenedioxy)thiophene):poly(styrenesulfonate)/active layer/Al were studied under an illumination of 1.3 mW/cm². The device comprising poly(3-hexylthiophene) with the compound end-capped with bithiophene rings showed the highest open-circuit voltage (Voc, above 1 V). The conversion efficiency of the fabricated solar cells was in the range of 0.69–0.90%.

    Modelling LGD for unsecured retail loans using Bayesian methods

    Loss Given Default (LGD) is the loss borne by the bank when a customer defaults on a loan. LGD for unsecured retail loans is often found difficult to model. In the frequentist (non-Bayesian) two-step approach, two separate regression models are estimated independently, which is potentially problematic when they are combined to make predictions about LGD. The result is a point estimate of LGD for each loan. Alternatively, LGD can be modelled using Bayesian methods. In the Bayesian framework, one can build a single hierarchical model instead of two separate ones, which makes this a more coherent approach. In this paper, Bayesian methods as well as the frequentist approach are applied to data on personal loans provided by a large UK bank. As expected, the posterior means of the parameters produced using Bayesian methods are very similar to the frequentist estimates. The most important advantage of the Bayesian model is that it generates an individual predictive distribution of LGD for each loan. Potential applications of such distributions include the downturn LGD and the stressed LGD under Basel II.
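
    To show where a predictive LGD distribution comes from, the sketch below fits a heavily simplified two-part Bayesian model with conjugate updates: a Beta-Bernoulli part for loans with zero loss and a Normal-Normal part for logit(LGD) when a loss occurs. It has no covariates and is not the paper's hierarchical model; the data and priors are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy 'observed' outcomes for defaulted loans (illustrative only).
        n = 500
        zero_loss = rng.uniform(size=n) < 0.3        # fully recovered loans, LGD = 0
        obs_logit = rng.normal(0.4, 0.8, size=(~zero_loss).sum())   # logit(LGD) | loss > 0

        # Part 1: P(LGD = 0) via a Beta-Bernoulli conjugate posterior.
        a_post = 1.0 + zero_loss.sum()
        b_post = 1.0 + (~zero_loss).sum()

        # Part 2: mean of logit(LGD) given loss > 0, Normal-Normal posterior
        # with a known observation sd (a simplifying assumption).
        sd_obs, prior_mu, prior_sd = 0.8, 0.0, 10.0
        prec = 1 / prior_sd**2 + len(obs_logit) / sd_obs**2
        mu_post = (prior_mu / prior_sd**2 + obs_logit.sum() / sd_obs**2) / prec
        sd_post = prec ** -0.5

        # Predictive distribution of LGD for a new loan: propagate posterior
        # uncertainty from both parts into one set of draws.
        draws = 20_000
        p = rng.beta(a_post, b_post, size=draws)
        mu = rng.normal(mu_post, sd_post, size=draws)
        lgd = np.where(rng.uniform(size=draws) < p, 0.0,
                       1 / (1 + np.exp(-rng.normal(mu, sd_obs))))

        print(f"mean LGD: {lgd.mean():.3f}")
        print(f"95th percentile (a downturn-style figure): {np.quantile(lgd, 0.95):.3f}")

    A high quantile of such per-loan draws is exactly the kind of figure a downturn or stressed LGD calculation would read off the predictive distribution.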

    Kalman filtering as a performance monitoring technique for a propensity scorecard

    Propensity scorecards allow forecasting which bank customers will soon be interested in new credit, by assessing their willingness to apply for new loans. Such scoring models are considered efficient tools for selecting customers for bank marketing campaigns. Kalman filtering can help monitor scorecard performance; the technique is illustrated with an example of a propensity scorecard developed on credit bureau data. Data from successive months are used to systematically update the baseline model, and the updated scorecard is the output of the Kalman filter. As the model parameters are estimated using commercial software dedicated to scorecard development, the properties of the estimator are unknown. It is assumed that the estimator is unbiased and asymptotically normal, and its variance is derived using the bootstrap. The odds, defined as a measure of a customer's willingness to apply for new loans, are calculated as the ratio of willing to unwilling customers among those with a given score. Once the scorecard is developed, an auxiliary linear model is estimated to find the relationship between the score and the natural logarithm of the odds for that score. That relationship is then used to determine a customer's propensity level. Every month a new sample is scored with both the baseline and the updated scorecards. For each customer the log odds are estimated using the relationship between the baseline score and the willingness to apply for new loans. That estimate, which represents the customer's propensity level provided that the baseline scorecard is still up to date, is then compared with the estimate computed using the relationship between the updated score and the log odds. The example demonstrates that a scorecard may become less and less up to date even though commonly used performance measures such as the Gini coefficient or the Kolmogorov-Smirnov statistic do not change considerably.
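
    The filtering step itself is standard. A minimal sketch, assuming a random-walk state equation for the coefficient vector and treating each month's re-estimated coefficients as a direct, noisy observation (with the observation covariance playing the role of the bootstrap-derived estimator variance), might look like this; all matrices and noise levels are illustrative.

        import numpy as np

        def kalman_update(beta, P, z, R, Q):
            """One monthly Kalman-filter step for a scorecard coefficient vector.

            State equation:  beta_t = beta_{t-1} + w,  w ~ N(0, Q)   (random walk)
            Observation:     z_t    = beta_t + v,      v ~ N(0, R)
            where z is the month's re-estimated coefficient vector and R its
            covariance (in the paper, obtained via the bootstrap)."""
            P_pred = P + Q                              # predict
            K = P_pred @ np.linalg.inv(P_pred + R)      # Kalman gain
            beta_new = beta + K @ (z - beta)            # correct with the innovation
            P_new = (np.eye(len(beta)) - K) @ P_pred
            return beta_new, P_new

        # Illustrative run: the 'true' coefficients drift slowly over 12 months.
        rng = np.random.default_rng(7)
        k = 4
        beta, P = np.zeros(k), np.eye(k)
        Q, R = 0.01 * np.eye(k), 0.25 * np.eye(k)
        truth = np.zeros(k)
        for month in range(12):
            truth += rng.normal(0, 0.05, k)             # slow population drift
            z = truth + rng.normal(0, 0.5, k)           # noisy monthly estimate
            beta, P = kalman_update(beta, P, z, R, Q)
        print("filtered coefficients:", np.round(beta, 3))

    Comparing the baseline coefficients with the filtered ones month by month reveals the kind of gradual drift that rank-order statistics like the Gini coefficient can miss.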

    LOTUS-based segmentation in credit scoring

    Credit scoring allows for the credit risk assessment of bank customers. A single scoring model (scorecard) can be built for the entire customer population. However, dividing the population into several groups and building separate scorecards for them can yield better results, in particular a higher performance of the whole model. That division is referred to as segmentation and is widely used in banking practice. There are various segmentation methods, and a few have recently been developed in an attempt to enable optimal segmentation, i.e. segmentation that maximises the model performance. However, those methods still suffer from serious drawbacks, such as exhaustive search, a predetermined number of segments, or the use of the same set of variables in all scorecards. In this research, a new segmentation method is suggested: the Logistic Tree with Unbiased Selection (LOTUS) algorithm. The suggested method is derived from data mining; it is free from the above-mentioned drawbacks and allows for optimal segmentation. It is tested using data provided by one of the major UK banks and one of the European credit bureaus. For comparison purposes, some reference models are also developed using techniques that are popular in credit scoring (logistic regression and classification trees).
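
    The core idea, that every tree node carries its own logistic scorecard and a split is kept only if it improves the fit, can be sketched as below. This greedy, median-split version is only in the spirit of LOTUS; the actual algorithm uses unbiased chi-squared-based variable selection, which is not reproduced here, and the demo data are invented.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def loglik(model, X, y):
            p = np.clip(model.predict_proba(X)[:, 1], 1e-9, 1 - 1e-9)
            return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

        def grow(X, y, depth=0, max_depth=2, min_leaf=1000):
            """Greedy logistic model tree: every node holds its own scorecard
            and a split is kept only if it improves the total log-likelihood.
            Assumes every candidate segment contains both classes."""
            node = LogisticRegression(max_iter=1000).fit(X, y)
            if depth == max_depth:
                return {"model": node}
            base_ll, best = loglik(node, X, y), None
            for j in range(X.shape[1]):                 # median split per variable
                thr = np.median(X[:, j])
                left = X[:, j] <= thr
                if min(left.sum(), (~left).sum()) < min_leaf:
                    continue
                ml = LogisticRegression(max_iter=1000).fit(X[left], y[left])
                mr = LogisticRegression(max_iter=1000).fit(X[~left], y[~left])
                gain = loglik(ml, X[left], y[left]) + loglik(mr, X[~left], y[~left]) - base_ll
                if best is None or gain > best[0]:
                    best = (gain, j, thr, left)
            if best is None or best[0] <= 0:
                return {"model": node}
            _, j, thr, left = best
            return {"var": j, "thr": thr,
                    "left": grow(X[left], y[left], depth + 1, max_depth, min_leaf),
                    "right": grow(X[~left], y[~left], depth + 1, max_depth, min_leaf)}

        def predict(tree, x):
            while "model" not in tree:
                tree = tree["left"] if x[tree["var"]] <= tree["thr"] else tree["right"]
            return tree["model"].predict_proba(x.reshape(1, -1))[0, 1]

        # Demo on data where the effect of x1 flips sign across segments.
        rng = np.random.default_rng(3)
        X = rng.normal(size=(20_000, 4))
        slope = np.where(X[:, 0] <= 0, 1.5, -0.5)
        y = (rng.uniform(size=len(X)) < 1 / (1 + np.exp(-slope * X[:, 1]))).astype(int)
        tree = grow(X, y)
        print("example PD:", round(predict(tree, X[0]), 4))

    Because segmentation and the per-segment scorecards are chosen jointly by the same fit criterion, the number of segments and the variables used in each scorecard fall out of the optimisation rather than being fixed in advance.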